For example, if I point out that the AI has good reasons not to kill us all, since it cannot determine whether it is in the top-level world, in a simulator, or in an engineering test sim, it is immediately conjectured that we will still ‘lose’ something because it will take up some resources in space.
To back up Carl’s claim, see Outline of possible Singularity scenarios (that are not completely disastrous) and further links in the comments there. You know, I keep hoping that you’d update your evaluation of this community and especially your estimate of how much we’ve already thought about these things, but maybe it’s time for me to update...
> You know, I keep hoping that you’d update your evaluation of this community and especially your estimate of how much we’ve already thought about these things, but maybe it’s time for me to update...
Yes. In general, the useful commenters on LessWrong seem to spend too much time arguing with hopeless cases and not enough time arguing with other useful commenters.
I just realized there’s another possible explanation: discussions/arguments between “useful commenters” usually stop getting upvoted after a certain point (probably because the disagreements are usually over peripheral issues that don’t interest a huge number of readers), whereas arguments against “hopeless cases” seem good for unlimited karma (probably because you’re making central points that everyone can understand). Perhaps I and others have been unconsciously letting this affect our behavior?
There are enough important differences of opinion between useful commenters about what we all should do on the grand scale that I would expect it to be at least possible, somehow, to create relatively high expected value by hashing these disagreements out. If the discussion is over peripheral issues that don’t much affect the answer to such big questions, maybe we’re going about it the wrong way.
I see. I had hoped to raise some debate by posting Some Thoughts on Singularity Strategies, but few FAI supporters responded, and none from SIAI. I have the feeling (and also some evidence) that there aren’t many people, aside from Eliezer, who are very gung-ho on trying to build an FAI directly.
I did have a private chat with Eliezer recently where I tried to find out why we disagree over FAI, and it seems to mostly come down to different estimates on how hard the philosophical problems involved are compared to his ability to correctly solve them.
> I did have a private chat with Eliezer recently where I tried to find out why we disagree over FAI, and it seems to mostly come down to different estimates on how hard the philosophical problems involved are compared to his ability to correctly solve them.
That’s good to know. Was the disagreement more about how hard the philosophical problems are, or about how good Eliezer is at solving philosophical problems, or some of both?

Some of both.
I’m not sure. Arguing with “hopeless cases” is high risk but high return (if we succeed in bringing in new blood and new insights). Arguing with other “useful commenters” perhaps marginally improves our beliefs and how we approach the problems we’re trying to solve, but much of the time when I disagree with some “useful commenter” I still think both of our approaches ought to be explored so there’s not that much gain from arguing with them. I’d typically state my reasons (just in case I’m making some kind of gross error) and leave it at that if it doesn’t change their mind.
I think good new insights in practice tend to come from old commenters who rethink things one point at a time and not as much from new commenters who start out with an attitude of belligerent dismissal.
> much of the time when I disagree with some “useful commenter” I still think both of our approaches ought to be explored so there’s not that much gain from arguing with them
I don’t understand. Doesn’t arguing with them constitute exploring the different approaches?
> I think good new insights in practice tend to come from old commenters who rethink things one point at a time and not as much from new commenters who start out with an attitude of belligerent dismissal.
I think it’s good to have some natural contrarians/skeptics around who like to find flaws in whatever ideas they see. I guess I played this role somewhat back in the OB days, but less so now that I’m closer to the “inner circle”. Of course, I was more careful to make sure the flaws were real flaws, and I was not very belligerent...
> I don’t understand. Doesn’t arguing with them constitute exploring the different approaches?
Maybe we’re not thinking about the same things. I’m talking about cases like when cousin_it or Nesov has some decision theory idea that I don’t think is particularly promising: I tend to let them work on it and either reach that conclusion themselves or obtain some undeniable result, instead of trying to talk them out of it and into working on my preferred approaches. What kind of arguments are you thinking of?
I suppose I was thinking of arguments more informal than decision theory, and I suppose in the context of such informal arguments, exchanging a lot of small chunks of reasoning seems more useful than it does in the context of building decision theory models.
> Yes. In general, the useful commenters on LessWrong seem to spend too much time arguing with hopeless cases and not enough time arguing with other useful commenters.
When I went on house-to-house preaching with other Jehovah’s Witnesses as a child, this was almost exactly what more experienced members told me to do when we encountered people who didn’t seem to understand that we were clearly right and only trying to warn them that the end is nigh.
I couldn’t follow that. Could you say it again in more detail?
What kinds of people would you encounter, and which were you told to spend time proselytizing? Were there many who immediately agreed with you? And was it best to give them lots of time?
It’s just that I don’t believe you folks really are this greedy for the sake of mankind, or assume such linear utility functions. If we could just provide food, shelter, and reasonable protection of human beings from other human beings, for everyone, a decade earlier, that, in my book, outweighs all the difference between immense riches and more immense riches sometime later. (edit: if the difference ever realizes itself; it may be that at any moment in time we are still ahead)
On top of that, if you fear WBEs self-improving: don’t we lose the ability to become WBEs, and to become smarter, under the rule of a friendly AI? Now, you have a perfect oracle in your model of the AI, and it concludes that this is okay, but I do not have a model of a perfect oracle in the AI, and it is abundantly clear that an AI of any power can’t predict the outcome of allowing WBE self-improvement, especially under ethical constraints that forbid boxed emulation (and even if it could, there is the immense amount of computational resources the FAI would take up to do this). Once again, the typically selective avenue of thought: you don’t apply each argument to both FAI and AI to make a valid comparison. I do know that you have already thought a lot about this issue (but I don’t think you have thought straight; this is not formal mathematics, where the inferences do not diverge from sense with the number of steps taken, it is fuzzy verbal reasoning, where they unavoidably do). You jump right here on the interpretation of what I think that is most favourable to you.